Procedure Planning
PlanLLM: Video Procedure Planning with Refinable Large Language Models
Yang, Dejie, Zhao, Zijing, Liu, Yang
Video procedure planning, i.e., planning a sequence of action steps given video frames of the start and goal states, is an essential ability for embodied AI. Recent works utilize Large Language Models (LLMs) to generate enriched action-step descriptions to guide action step decoding. Although LLMs are introduced, these methods still decode action steps into a closed set of one-hot vectors, limiting the model's ability to generalize to new steps or tasks. Additionally, fixed action-step descriptions based on world-level commonsense may be noisy for specific instances of visual states. In this paper, we propose PlanLLM, a cross-modal joint learning framework with LLMs for video procedure planning. We propose an LLM-Enhanced Planning module that fully exploits the generalization ability of LLMs to produce free-form planning output and to enhance action step decoding. We also propose a Mutual Information Maximization module that connects the world-level commonsense of step descriptions with the sample-specific information of visual states, enabling LLMs to employ their reasoning ability to generate step sequences. With the assistance of LLMs, our method can handle both closed-set and open-vocabulary procedure planning tasks. PlanLLM achieves superior performance on three benchmarks, demonstrating the effectiveness of our designs.
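The paper's code is not reproduced here, but its mutual-information objective between step descriptions and visual states can be read as a standard symmetric InfoNCE contrastive loss (InfoNCE is a common lower bound on mutual information). The following is a minimal PyTorch sketch under that assumption; the function name and `temperature` default are mine, not PlanLLM's API.

```python
# Hedged sketch: maximizing mutual information between paired embeddings
# via a symmetric InfoNCE loss. Names are illustrative assumptions,
# not PlanLLM's actual implementation.
import torch
import torch.nn.functional as F

def info_nce_loss(step_emb: torch.Tensor,
                  visual_emb: torch.Tensor,
                  temperature: float = 0.07) -> torch.Tensor:
    """step_emb, visual_emb: (batch, dim) paired step/state embeddings."""
    step_emb = F.normalize(step_emb, dim=-1)
    visual_emb = F.normalize(visual_emb, dim=-1)
    logits = step_emb @ visual_emb.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(step_emb.size(0), device=step_emb.device)
    # Matched (description, state) pairs are positives; all other pairs
    # in the batch act as negatives, in both directions.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```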
RAP: Retrieval-Augmented Planner for Adaptive Procedure Planning in Instructional Videos
Zare, Ali, Niu, Yulei, Ayyubi, Hammad, Chang, Shih-fu
Procedure planning in instructional videos entails generating a sequence of action steps based on visual observations of the initial and target states. Despite rapid progress on this task, several critical challenges remain: (1) Adaptive procedures: prior works hold the unrealistic assumption that the number of action steps is known and fixed, leading to models that do not generalize to real-world scenarios where the sequence length varies. (2) Temporal relation: understanding the temporal relations between steps is essential for producing reasonable and executable plans. (3) Annotation cost: annotating instructional videos with step-level labels (i.e., timestamps) or sequence-level labels (i.e., action categories) is demanding and labor-intensive, limiting generalizability to large-scale datasets. In this work, we propose a new and practical setting, called adaptive procedure planning in instructional videos, where the procedure length is not fixed or pre-determined. To address these challenges, we introduce the Retrieval-Augmented Planner (RAP) model. Specifically, for adaptive procedures, RAP adaptively determines the end of the action sequence using an auto-regressive model architecture. For temporal relations, RAP establishes an external memory module that explicitly retrieves the most relevant state-action pairs from the training videos and revises the generated procedures. To tackle the high annotation cost, RAP adopts weakly-supervised learning to expand the training dataset to other task-relevant, unannotated videos by generating pseudo-labels for action steps. Experiments on the CrossTask and COIN benchmarks show the superiority of RAP over traditional fixed-length models, establishing it as a strong baseline for adaptive procedure planning.
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Education > Educational Technology > Media (1.00)
- Education > Educational Technology > Audio & Video (1.00)
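To make RAP's external-memory idea concrete, here is a small, self-contained sketch: store (state embedding, action sequence) pairs from training videos and retrieve the nearest entries by cosine similarity, which a planner could then use to revise a generated procedure. The class and method names are mine for illustration, not the paper's API.

```python
# Illustrative sketch (not the authors' code) of a retrieval memory over
# state-action pairs, queried by cosine similarity.
import torch
import torch.nn.functional as F

class StateActionMemory:
    def __init__(self, state_embs: torch.Tensor, action_seqs: list[list[int]]):
        # state_embs: (num_entries, dim); one row per stored training clip
        self.keys = F.normalize(state_embs, dim=-1)
        self.values = action_seqs

    def retrieve(self, query: torch.Tensor, k: int = 3) -> list[list[int]]:
        """Return the action sequences of the k most similar stored states."""
        q = F.normalize(query, dim=-1)
        sims = self.keys @ q                    # cosine similarity to every key
        topk = sims.topk(k).indices.tolist()
        return [self.values[i] for i in topk]

# Toy usage: 5 stored clips with 16-d embeddings and short action plans.
memory = StateActionMemory(torch.randn(5, 16),
                           [[1, 2], [3], [2, 4, 5], [1], [4, 2]])
print(memory.retrieve(torch.randn(16), k=2))
```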
SCHEMA: State CHangEs MAtter for Procedure Planning in Instructional Videos
Niu, Yulei, Guo, Wenliang, Chen, Long, Lin, Xudong, Chang, Shih-Fu
We study the problem of procedure planning in instructional videos, which aims to make a goal-oriented sequence of action steps given partial visual state observations. The motivation of this problem is to learn a structured and plannable state and action space. Recent works succeeded in sequence modeling of steps with only sequence-level annotations accessible during training, but overlooked the roles of states in the procedures. In this work, we point out that State CHangEs MAtter (SCHEMA) for procedure planning in instructional videos. We aim to establish a more structured state space by investigating the causal relations between steps and states in procedures. Specifically, we explicitly represent each step as state changes and track the state changes in procedures. For step representation, we leverage the commonsense knowledge in large language models (LLMs) to describe the state changes of steps via our designed chain-of-thought prompting. For state change tracking, we align visual state observations with language state descriptions via cross-modal contrastive learning, and explicitly model the intermediate states of the procedure using LLM-generated state descriptions. Experiments on the CrossTask, COIN, and NIV benchmark datasets demonstrate that our proposed SCHEMA model achieves state-of-the-art performance and yields explainable visualizations.

Humans are natural experts in procedure planning, i.e., arranging a sequence of instruction steps to achieve a specific goal. Procedure planning is an essential and fundamental reasoning ability for embodied AI systems and is crucial in complicated real-world problems like robotic navigation (Tellex et al., 2011; Jansen, 2020; Brohan et al., 2022). Instruction steps in procedural tasks are commonly state-modifying actions that induce state changes of objects. For example, in the task of "grilling steak", a raw steak would first be topped with pepper after "seasoning the steak", then placed on the grill before "closing the lid", and become cooked pieces after "cutting the steak". These before-states and after-states reflect fine-grained information such as the shape, color, size, and location of entities. Therefore, planning agents need to figure out both the temporal relations between action steps and the causal relations between steps and states.
- North America > United States (0.14)
- Asia > China > Hong Kong (0.04)
- Research Report (1.00)
- Workflow (0.93)
- Instructional Material > Course Syllabus & Notes (0.82)
- Education > Educational Technology > Audio & Video (0.92)
- Education > Educational Technology > Media (0.83)
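SCHEMA's chain-of-thought prompting for state changes can be pictured with a toy prompt builder like the one below: ask an LLM to describe each step as a before-state / after-state pair, using the paper's own "grilling steak" example. The exact prompt wording is an assumption for illustration, not SCHEMA's released prompts.

```python
# Hedged sketch of chain-of-thought prompting for step state changes.
# The wording is illustrative, not the paper's actual prompt.
def build_state_change_prompt(task: str, step: str) -> str:
    return (
        f"Task: {task}\n"
        f"Step: {step}\n"
        "Let's think about what this step changes.\n"
        "Describe the state of the objects BEFORE the step, "
        "then the state AFTER the step, in one short sentence each."
    )

prompt = build_state_change_prompt("grilling steak", "season the steak")
print(prompt)
# A real system would send `prompt` to an LLM and cache the returned
# before/after descriptions as language supervision for state tracking.
```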
Procedure Planning in Instructional Videos via Contextual Modeling and Model-based Policy Learning
Bi, Jing, Luo, Jiebo, Xu, Chenliang
Learning new skills by observing human behavior is an essential capability of AI. In this work, we leverage instructional videos to study humans' decision-making processes, focusing on learning a model to plan goal-directed actions in real-life videos. In contrast to conventional action recognition, goal-directed actions are based on expectations of their outcomes, requiring causal knowledge of the potential consequences of actions. Thus, integrating the environment structure with goals is critical for solving this task. Previous works that learn a single world model fail to distinguish between tasks, resulting in an ambiguous latent space; planning through such a space gradually neglects the desired outcome, since global information about the future goal degrades quickly as the procedure evolves. We address these limitations with a new formulation of procedure planning and propose novel algorithms to model human behavior through Bayesian inference and model-based imitation learning. Experiments conducted on real-world instructional videos show that our method achieves state-of-the-art performance in reaching the indicated goals. Furthermore, the learned contextual information presents interesting features for planning in a latent space.
- North America > United States (0.14)
- Asia > China > Guangxi Province > Nanning (0.04)
- Instructional Material > Course Syllabus & Notes (0.69)
- Research Report (0.64)
- Workflow (0.46)
- Education > Educational Technology > Audio & Video (0.91)
- Education > Educational Technology > Media (0.81)
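The core model-based idea (roll candidate actions through a learned transition model and compare the predicted end state with the goal) can be illustrated with the brute-force toy below. This stands in for, and is far simpler than, the paper's Bayesian inference and imitation-learning machinery; the transition model, dimensions, and names are all mine.

```python
# Toy illustration (my construction, not the authors' algorithm):
# exhaustively score fixed-horizon plans by predicted distance to the goal.
import itertools
import torch

def rollout(transition, state, actions):
    """Apply a sequence of actions through the transition model."""
    for a in actions:
        state = transition(state, a)
    return state

def plan_by_search(transition, start, goal, action_space, horizon=3):
    best_plan, best_dist = None, float("inf")
    for plan in itertools.product(action_space, repeat=horizon):
        end = rollout(transition, start, plan)
        dist = torch.dist(end, goal).item()   # distance to the goal state
        if dist < best_dist:
            best_plan, best_dist = list(plan), dist
    return best_plan

# Toy transition: each action shifts the latent state by a fixed vector.
dim, n_actions = 4, 3
shifts = torch.randn(n_actions, dim)
transition = lambda s, a: s + shifts[a]
start, goal = torch.zeros(dim), shifts[1] + shifts[2]
print(plan_by_search(transition, start, goal, action_space=range(n_actions)))
```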